Any chance of supporting Intel® Xeon Phi™ Coprocessors?


  • Any chance of supporting Intel® Xeon Phi™ Coprocessors?

    Hi,

    The battle for the setup is still on. Have you guys seen this one?
    Is there any chance that V-Ray will support these (or even be sold with them)?

  • #2
    Here is the link:

    http://ark.intel.com/products/71992/...53-ghz-60-core

    • #3
      We have done some preliminary tests with this; it sort of works, but there are many issues still to solve, ranging from compiler bugs to hardware issues to specific code optimizations.

      Best regards,
      Vlado
      I only act like I know everything, Rogers.

      • #4
        Is there a final decision yet on whether you will support it, or is it too early to say?
        THX!

        • #5
          We have not made a final decision yet; we are still looking into it.

          Best regards,
          Vlado
          I only act like I know everything, Rogers.

          • #6
            All right, I will check back in 2 months then... xD I know one person looking into the possibility too, though he is not working on a commercial renderer..

            What do you think about the new ATI cards? 5 teraflops (lol). How much of that is utilizable? Is OpenCL still difficult?
            From a technical perspective it seems incredible.

            Have you thought about writing an API for the cards to utilize them better? (Lots of work?)

            Man, I have been sitting behind computers for 25 years, at least one of them spent just waiting for renders to finish. I am old; I want a close-to-real-time solution... Any chance you see this becoming a reality with V-Ray(X)?

            • #7
              ATI drivers are bad. Until they solve that, V-Ray will suffer on ATI cards.
              CGI - Freelancer - Available for work

              www.dariuszmakowski.com - come and look

              • #8
                Originally posted by losbellos73 View Post
                Man, I have been sitting behind computers for 25 years, at least one of them spent just waiting for renders to finish. I am old; I want a close-to-real-time solution... Any chance you see this becoming a reality with V-Ray(X)?
                There was a calculation by Intel once that for a truly real-time raytracing solution you would need 100 to 1000 times the power of current hardware. Make your own estimates of when that will happen and how much it will cost. Of course, it is possible to get such speeds even today, if you are prepared to spend many thousands of dollars on hardware.
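[Editor's aside, not part of the thread: the "100 to 1000 times" figure can be turned into a rough timeline if you assume hardware performance keeps doubling on a Moore's-law-style cadence. The 2-year doubling period below is an assumption for illustration, not a claim from the post.]

```python
import math

def years_until(speedup, doubling_period_years=2.0):
    """Years needed to reach a given speedup over today's hardware,
    assuming performance doubles every `doubling_period_years` years."""
    return math.log2(speedup) * doubling_period_years

for target in (100, 1000):
    # 100x -> ~13.3 years, 1000x -> ~19.9 years under this assumption
    print(f"{target}x speedup: ~{years_until(target):.1f} years")
```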

                As far as AMD cards are concerned, I don't know if we need to do anything on our side. After all, nVidia cards work fine, so it is AMD's task to make their cards work fine and fast too.

                Best regards,
                Vlado
                I only act like I know everything, Rogers.

                • #9
                  Originally posted by vlado View Post
                  There was a calculation by Intel once that for a truly real-time raytracing solution you would need 100 to 1000 times the power of current hardware. Make your own estimates of when that will happen and how much it will cost.

                  Best regards,
                  Vlado
                  Yes Vlado, it's a wild guess as to when real-time ray tracing will be with us. It seems we will be waiting for some kind of hardware breakthrough before it becomes a reality. CPUs have hit a brick wall when it comes to faster speeds, as they have pretty much reached the physical limits of the current technology.
                  Things have been moving very fast, though, with the new material of the future (graphene), and hopefully we will soon see huge leaps in speed and performance again, with a completely new generation of computers that will make current ones look like slow pocket calculators (and silicon will become a museum piece).
                  Let's keep hoping the R&D scientists keep getting funding to make our lives better!

                  • #10
                    Have there been any updates on V-Ray supporting the Phi? Are you seeing significant decreases in render times with it?

                    • #11
                      Originally posted by tvicari View Post
                      Have there been any updates on V-Ray supporting the Phi? Are you seeing significant decreases in render times with it?
                      There's unfortunately not very good news here. After a lot of tinkering with Phi-specific code, we managed to get the performance of our test raytracer to just about match that of the host machine (a dual Xeon E5, I think). The results were not too encouraging when running the full V-Ray either. We could never get the OpenCL version of our code to work, but you can look here for some OpenCL benchmarks (and remember that OpenCL is in general slower than CUDA on nVidia GPUs): http://compubench.com/compare.jsp?co...fig_1=15887974

                      Best regards,
                      Vlado
                      I only act like I know everything, Rogers.

                      • #12
                        Vlado,

                        Thank you for taking the time to research the Xeon Phi option. I have been watching this forum thread and waiting for your word on the tests; now that it is out, I have some thoughts about it.

                        If, after a lot of tinkering with the code, you got the Phi to render at about the same speed as a dual Xeon E5, that doesn't sound too bad. Actually, I never thought it would be much more than that. The way I see it, if you can double the render speed of a high-end machine just by adding a card that costs around $4,000.00, that seems like a really good deal to me, considering that the host machine (or an equally equipped render node) wouldn't cost less than $7k or so. Add to that the fact that you could fit 3 of those cards in one machine, plus leave space for your GPU, and you could have a mini render farm equivalent to 4 top-notch machines for around $20k, and that would be using the full version of V-Ray.

                        GPUs are great for testing, and they are much faster, but they still don't provide all the features of V-Ray on the CPU, and then there are all the memory issues; it seems those won't be solved soon. Moore's law is not really helping much lately when it comes to CPUs, so the Xeon Phi still seems like a solid alternative to me. Unless you think that in the near future GPU rendering will replace most CPU tasks... Of course, if programming for the Phi would mean maintaining a third version of V-Ray (CPU, GPU and Phi), I understand that it would slow down the development of the new tools we all crave and love.

                        Those were just my 2 cents on the conversation; thanks a lot for all the hard work from the V-Ray team. Cheers and happy new year.
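[Editor's aside, not part of the thread: the cost argument in the post above can be checked with a quick back-of-envelope calculation. All prices are the poster's rough estimates, not vendor quotes.]

```python
# The poster's rough figures (USD), not vendor quotes:
host_machine = 7000    # dual-Xeon host or an equally equipped render node
phi_card = 4000        # one Xeon Phi card
cards_per_host = 3     # slots left in one machine after keeping a GPU

# One host plus three Phi cards, each roughly matching the host's speed:
phi_farm_cost = host_machine + cards_per_host * phi_card
phi_farm_speed = 1 + cards_per_host  # in "host machine" units

# The same aggregate speed built from standalone render nodes:
node_farm_cost = phi_farm_speed * host_machine

# Under these assumptions the Phi route saves about $9k:
print(phi_farm_cost, node_farm_cost)  # 19000 28000
```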
                        Rodrigo Washington

                        • #13
                          http://www.workstationspecialist.com...n_phi/phi3100/
                          I had a chat with this dealer.
                          They sell it, but they do not advise using these cards for rendering (whatever the rendering engine).
